ai[e87,jmc] AI and science
	Like mathematics, philosophy, and engineering, AI differs
from the (other) sciences. Whether or not it fits someone's definition
of a science, it has need of scientific methods, including
controlled experimentation.
	First of all, it seems to me that AI is properly part of
computer science. It concerns procedures for achieving goals
under certain conditions of information and possibility for action.
We can even consider it analogous to linear programming. Indeed, if
achieving one's goals always consisted of finding the values of
a collection of real variables that would minimize a linear
function of these variables subject to a collection of linear
inequalities, then AI would coincide with linear programming.
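	For concreteness, here is the standard textbook form of such a
problem (a generic formulation of linear programming, not anything
specific to this essay):
$$\min_{x}\; c^{T}x \quad \hbox{subject to} \quad Ax \le b,$$
where $x$ is a vector of real variables, $c$ defines the linear
objective, and $A$ and $b$ encode the linear inequality constraints.
The point of the analogy is that here the goals, the actions, and the
available information all have a fixed mathematical shape; in AI they
generally do not.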
	However, the relation between goals, the available actions,
the information initially available, and the information that can
later be acquired is sometimes more complex than in any of the
branches of computer science whose scientific treatment consists
mainly of mathematical theorems. We don't have a mathematical
formalization of the general problems faced in AI, let alone general
mathematical methods for their solution. Indeed, what we know of
human intelligence doesn't suggest that a conventional mathematical
formalization of the problems intelligence is used to solve even
exists. For this reason AI is to a substantial degree an experimental
science.
	The fact that a general mathematical formalization of the problems
intelligence solves is unlikely doesn't make mathematics useless in AI.
Many aspects of intelligence are formalizable: the languages of
mathematical logic are useful for expressing facts about the common
sense world, and logical reasoning, especially as extended by
non-monotonic reasoning, is useful for drawing conclusions.
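	As a standard illustration of the kind of non-monotonic rule
meant here (the familiar birds-fly default, offered as an example
rather than taken from this essay), one writes
$$\forall x.\; \hbox{bird}(x) \land \lnot \hbox{ab}(x) \supset \hbox{flies}(x),$$
and minimizes the extension of the abnormality predicate $\hbox{ab}$,
as in circumscription. From $\hbox{bird}(\hbox{Tweety})$ alone one
then concludes $\hbox{flies}(\hbox{Tweety})$, a conclusion that must
be withdrawn if $\hbox{ab}(\hbox{Tweety})$ is later learned, say
because Tweety turns out to be a penguin. It is this withdrawal of
conclusions in the light of new facts that makes the reasoning
non-monotonic.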
	In my view a large part of AI research should consist of the
identification and study of intellectual mechanisms, e.g. pattern
matching and learning. The problems whose computer solution exhibits
these mechanisms should be chosen for scientific perspicuousness,
just as genetics uses fruit flies and bacteria.
A. S. Kronrod once said that chess is the {\it Drosophila} of artificial
intelligence. He might have been right, but the organizations that
support research have taken the view that problems should be chosen
for their practical importance. Sometimes it is as if geneticists
were required to do their work with elephants on the grounds that
elephants are useful and fruit flies are not. Anyway, chess has been
left to the sportsmen, most of whom write only programs, not scientific
papers, and compete over who can get time on the largest computers or
persuade someone to support the construction of specialized chess
computers.
Donald Norman's complaints about the way AI research is
conducted have some validity, but the problem of isolating
intellectual mechanisms and making experiments worth repeating is
yet to be solved, so it isn't just a question of deciding to
be virtuous.
	Finally, I'll remark that AI is not the same as cognitive
psychology, although the two studies are allied. AI concentrates
more on the necessary relations between means and ends, while
cognitive psychology concentrates on how humans and animals
achieve their goals. Any success in either endeavor helps the other.
Methodology in AI is worth studying, but acceptance of its results
should be moderated by memory of the behaviorist catastrophe in
psychology. Doctrines arising from methodological studies crippled
that science for half a century. Indeed, psychology was rescued only
by ideas arising from the invention of the computer --- and at least
partly ideas originating in AI.
Date: Fri, 3 Jul 87 07:29:41 pdt
From: norman%ics@sdcsvax.ucsd.edu (Donald A. Norman)
Subject: Why AI is not a science
A private message to me in response to my recent AIList posting,
coupled with general observations, led me to realize why so many of us
otherwise friendly folks in the sciences that neighbor AI can be so
frustrated with AI's casual attitude toward theory: AI is not a science,
and its practitioners are woefully untutored in scientific method.
At the recent MIT conference on Foundations of AI, Nils Nilsson stated
that AI was not a science, that it had no empirical content, no
claims to empirical content, that it said nothing of any empirical
value. AI, stated Nilsson, was engineering. No more, no less. (And
with that statement he left to catch an airplane, stopping further
discussion.) I objected to the statement, but now that I consider it
more deeply, I believe it to be correct and to reflect the
dissatisfaction people like me (i.e., "real scientists") feel with AI.
The problem is that most folks in AI think they are scientists and
think they have the competence to pronounce scientific theories about
almost any topic, but especially about psychology, neuroscience, or
language. Note that perfectly sensible disciplines such as
mathematics and philosophy are also not sciences, at least not in the
normal interpretation of that word. It is no crime not to be a
science. The crime is to think you are one when you aren't.
AI worries a lot about methods and techniques, with many books and
articles devoted to these issues. But by methods and techniques I
mean such topics as the representation of knowledge, logic,
programming, control structures, etc. None of these methods includes
anything about content. And there is the flaw: nobody in the field of
Artificial Intelligence speaks of what it means to study intelligence,
of what scientific methods are appropriate, what empirical methods are
relevant, what theories mean, and how they are to be tested. All the
other sciences worry a lot about these issues, about methodology,
about the meaning of theory and what the appropriate data collection
methods might be. AI is not a science in this sense of the word.
Read any standard text on AI: Nilsson or Winston or Rich or
even the multi-volume Handbook. Nothing on what it means to
test a theory, to compare it with others, nothing on what
constitutes evidence, or on how to conduct experiments.
Look at any science and you will find lots of books on
experimental method, on the evaluation of theory. That is why
statistics are so important in psychology or biology or
physics, or why counterexamples are so important in
linguistics. Not a word on these issues in AI.
The result is that practitioners of AI have no experience in the
complexity of experimental data, no understanding of scientific
method. They feel content to argue their points through rhetoric,
example, and the demonstration of programs that mimic behavior thought
to be relevant. Formal proof methods are used to describe the formal
power of systems, but this rigor in the mathematical analysis is not
matched by any similar rigor of theoretical analysis and evaluation
for the content.
This is why other sciences think that folks in AI are off-the-wall,
uneducated in scientific methodology (the truth is that they are), and
completely incompetent at the doing of science, no matter how
brilliant at the development of mathematics of representation or
formal programming methods. AI will contribute to the A, but will
not contribute to the I unless and until it becomes a science and
develops an appreciation for the experimental methods of science. AI
might very well develop its own methods -- I am not trying to argue
that existing methods of existing sciences are necessarily appropriate
-- but at the moment, there is only clever argumentation and proof
through made-up example (the technical expression for this is "thought
experiment" or "Gedanken experiment"). Gedanken experiments are not
accepted methods in science: they are merely suggestive, a source
of ideas, not evidence in the end.
don norman
Donald A. Norman
Institute for Cognitive Science C-015
University of California, San Diego
La Jolla, California 92093
norman@nprdc.arpa {decvax,ucbvax,ihnp4}!sdcsvax!ics!norman
norman@sdics.ucsd.edu norman%sdics.ucsd.edu@RELAY.CS.NET
Date: Tue, 30 Jun 87 07:18:40 pdt
From: norman%ics@sdcsvax.ucsd.edu (Donald A. Norman)
Subject: On how AI answers psychological issues
A comment on sin in AI, or "Why did the $6M man run so slowly?"
AI researchers seem to like the sin of armchair reasoning. It's a
pleasant sin: comfortable, fun, stimulating. And nobody can ever be
proven right or wrong. Most scientists, on the other hand, believe
that real answers are generated through the collection of data,
interpreted by validated theories.
The question "why did the $6M man run so slowly" is a case in point,
but my answer is also stimulated by the conference on "Foundation of
AI" that I just attended (held at MIT, arguing about the several
theoretical approaches to the representationa and simulation of
intelligence). In AIlist, many folks have let forth their theories.
Some are clever, some are interesting. Some are probably right, some
are probably wrong. How would one ever know which? Letting forth
with opinions is no way to answer a scientific question.
At the conference, many of AI's most famed luminaries let forth with
their opinions. Psychological phenomena were made up and explained
faster than the speed of thought. The same observation applies. The
only thing worse is when a researcher (in any discipline) becomes a
parent. Then the theories spin wildly and take the form: my child did
the following thing; therefore, all children do it; and therefore here
is how the mind works.
Same for why the $6M man ran so slowly. If you really want to know
why slow motion was used, ASK THE FILM MAKER! (producer, cameraman,
editor, director). The film maker selected this method for one of
several possible reasons, and armchair reasoning about it will get you
nowhere. It might have been to stretch out the film, for budgetary
reasons, because they didn't know what else to do, because they
accidentally hit the slow-motion switch once and, once they got
started in this direction, all future films had to be consistent, etc.
One suspects that the filmmakers did not go through the long, elaborate
reasoning that some of the respondents assumed. Whatever the reason,
the best (and perhaps only) way to find out is to ask the people who
made the decision. Of course, they themselves may not know, given
that many of our actions are not consciously known to us and do not
necessarily follow from neat declarative rules stored in some nice
simple memory format (which is why expert systems methodology is
fundamentally flawed, but that is another story), but at least the
verbally described reasons can give you a starting point.
Note that the discussion has confounded several different questions.
One question is "why did the film makers choose to use slow motion?" A
second question is, given that they made that choice, "why does the
slow motion presentation of speeded motion produce a reasonable
effect on the viewer?" Here the answer can only come about through
experimentation. However, for this question, the armchair
explanations make more sense and can start out as a plausible set of
hypotheses to be examined.
A third question has been raised in the discussion, which is "during
times of stress, or incipient danger, or doing a rapid task when very
well skilled, does subjective time pass more slowly?" This is an
oft-reported finding. Damn-near impossible to test. (Possible,
though: subjective time, for example, changes with body temperature,
going faster when body temperature is raised, slower when lowered, and
since it is possible to determine that fact experimentally, you should
be able to determine the other.) The nature of subjective time is
most complex, but the evidence would have it that filled time passes
quite differently than unfilled time, and the expert or person intensely
focused upon events is apt to attend to details not normally
visible, hence filling the time interval with many more activities
and events, hence changing the perception of time.
But before you all bombard the net with lots of anecdotes about what
it felt like during your auto accident, or skiing incident, or ..., let
me remind you that the experience you have DURING the event itself is
quite different from your memory of that experience. The
experimental research on time perception shows that subjective
durations can reverse. (Events that may be boring to experience --
time passes ever so slowly -- may be judged to have taken almost no
time at all in future retrospection -- no remembered events. Events
with numerous things happening -- so quickly that you didn't have time
to respond to most of them -- in retrospect may seem to have taken
forever.)
The moral is that understanding the human (or animal) mind is most
difficult; it is apt to come about only through a combination of
experimental study, theoretical modeling, and simulation, and armchair
thinking, while fun, is pretty irrelevant to the endeavor.
Psychology, the field, can be frustrating to the non-participant.
Many tedious experiments. Dumb experiments. An insistence on
methodology that borders on the insane. And an apparent inability to
answer even the simplest questions. Guilty. But for a reason. Thinking
about "how the mind works" is fun, but it is not science, not the way
to get to the correct answer.
don norman
Donald A. Norman
Institute for Cognitive Science C-015
University of California, San Diego
La Jolla, California 92093
norman@nprdc.arpa {decvax,ucbvax,ihnp4}!sdcsvax!ics!norman
norman@sdics.ucsd.edu norman%sdics.ucsd.edu@RELAY.CS.NET